We present a self-training approach to unsupervised dependency parsing that reuses existing supervised and unsupervised parsing algorithms. Our approach, called 'iterated reranking' (IR), starts with dependency trees generated by an unsupervised parser, and iteratively improves these trees using the richer probability models used in supervised parsing, which are in turn trained on these trees. Our system achieves accuracy 1.8% higher than the state-of-the-art parser of Spitkovsky et al. (2013) on the WSJ corpus.
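The IR loop alternates between training a richer model on the current trees and using that model to rerank candidate parses. The following is a minimal, runnable sketch of that loop; the toy components (`init_parse`, `train`, `k_best`, `score`, trees as lists of head indices) are illustrative stand-ins, not the authors' actual parsers or models.

```python
from collections import Counter

def iterated_reranking(sentences, init_parse, train, k_best, score,
                       iterations=3):
    """Generic IR loop: bootstrap trees with an initial (unsupervised)
    parser, then alternate training a richer model on those trees and
    reranking the model's candidate parses."""
    trees = [init_parse(s) for s in sentences]
    for _ in range(iterations):
        model = train(sentences, trees)            # retrain on current trees
        trees = [max(k_best(model, s),             # rerank candidates
                     key=lambda t: score(model, s, t))
                 for s in sentences]
    return trees

# --- toy stand-ins below: a "tree" is a list of head indices (root = -1) ---

def init_parse(s):
    # Bootstrap parser: left-branching baseline (each word attaches
    # to the word before it).
    return [-1] + list(range(len(s) - 1))

def train(sentences, trees):
    # "Richer" model: counts of (dependent word, head word) pairs.
    model = Counter()
    for s, t in zip(sentences, trees):
        for i, h in enumerate(t):
            model[(s[i], "ROOT" if h < 0 else s[h])] += 1
    return model

def k_best(model, s):
    # Candidate list: left-branching tree and a flat tree rooted
    # at the first word (a real system would use a k-best parser).
    left = [-1] + list(range(len(s) - 1))
    flat = [-1] + [0] * (len(s) - 1)
    return [left, flat]

def score(model, s, t):
    # Tree score: sum of observed (dependent, head) counts.
    return sum(model[(s[i], "ROOT" if h < 0 else s[h])]
               for i, h in enumerate(t))

sents = [["the", "dog", "barks"], ["the", "cat", "sleeps"]]
trees = iterated_reranking(sents, init_parse, train, k_best, score)
# each element of `trees` is a list of head indices, one per word
```

In this toy setup the left-branching trees are a fixed point: the counting model trained on them scores them above the flat alternative, so reranking keeps them unchanged across iterations.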